Non-speech Sounds for User Interface Control
Abstract
Non-speech sounds have already gained the interest of researchers in the HCI community when used in data sonification and acoustic user interface design, usually as a means of data presentation. Their complementary role, their use for data input and user interface control by the users who produce them, has until recently been ignored. In the current literature, the use of non-speech sounds has been investigated only in isolated instances, such as controlling a particular game or input device. In our recent work, we have demonstrated that in different scenarios, such as emulation of the mouse or keyboard, non-speech sounds can successfully compete with other assistive techniques and technologies that allow motor-impaired users to carry out tasks on a standard PC. Non-speech sound input is thus an inexpensive alternative to various costly and uncomfortable systems, such as sip-and-puff controllers or eye trackers. As opposed to speech recognition techniques, non-speech sounds can be used by users with speech difficulties and are language- and culture-independent. This thesis focuses on control by the pitch of a tone. A novel method for measuring the comfortable pitch range is proposed. Two elementary methods for accomplishing 1D pointing tasks are evaluated in a user study: pointing by means of absolute tone pitch and pointing by means of acoustic gestures using relative pitch. Further, a method for mouse cursor control is proposed and evaluated in a longitudinal study. The last chapter of this thesis focuses on the use of pitch control in computer games. This document demonstrates that non-speech sounds are a feasible input channel for various applications, especially where a rapid response of the user interface is expected and required, including mouse cursor and video game control.
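To make the absolute-pitch pointing idea concrete, the sketch below shows one possible reading of it: the fundamental frequency of a hummed tone is estimated by autocorrelation and mapped, on a logarithmic scale, onto a horizontal cursor coordinate within a calibrated comfortable pitch range. This is a minimal illustrative sketch, not the thesis implementation; the sampling rate, frequency bounds, silence threshold, and function names are assumptions.

```python
import numpy as np

SAMPLE_RATE = 16000           # Hz, assumed microphone sampling rate
F_LOW, F_HIGH = 100.0, 400.0  # assumed comfortable pitch range of the user (Hz)
SCREEN_WIDTH = 1920           # target 1D pointing range in pixels


def estimate_pitch(frame: np.ndarray, fs: int = SAMPLE_RATE) -> float | None:
    """Estimate the fundamental frequency of a mono audio frame, or None if unvoiced."""
    frame = frame - frame.mean()
    if np.max(np.abs(frame)) < 1e-3:           # simple silence gate
        return None
    corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lag_min = int(fs / F_HIGH)                 # shortest period of interest
    lag_max = int(fs / F_LOW)                  # longest period of interest
    if lag_max >= len(corr):
        return None
    lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
    return fs / lag


def pitch_to_x(f0: float) -> int:
    """Map a pitch inside the comfortable range to a horizontal cursor position."""
    rel = (np.log(f0) - np.log(F_LOW)) / (np.log(F_HIGH) - np.log(F_LOW))
    rel = min(max(rel, 0.0), 1.0)              # clamp to the calibrated range
    return int(rel * (SCREEN_WIDTH - 1))


# Example: a 1024-sample frame of a 200 Hz sine maps near the middle of the screen.
t = np.arange(1024) / SAMPLE_RATE
frame = np.sin(2 * np.pi * 200.0 * t)
f0 = estimate_pitch(frame)
if f0 is not None:
    print(f"pitch ~ {f0:.1f} Hz -> x = {pitch_to_x(f0)} px")
```

Relative-pitch control (acoustic gestures) would instead move the cursor by the change in estimated pitch between successive frames rather than by its absolute value.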
Similar Resources
The Use of Non-Speech Sounds in Non-Visual Interfaces to the MS-Windows GUI for Blind Computer Users
Two studies investigated the use of non-speech sounds (auditory icons and earcons) in non-visual interfaces to MS-Windows for blind computer users. The first study presented sounds in isolation and blind and sighted participants rated them for their recognisability, and appropriateness of the mapping between the sound and the interface object/event. As a result, the sounds were revised and inco...
Design and evaluation of prosody-based non-speech audio feedback for physical training application
Methodological support for the design of non-speech user interface sounds for human–computer interaction is still fairly scarce. To meet this challenge, this paper presents a sound design case which, as a practical design solution for a wrist-computer physical training application, outlines a prosody-based method for designing non-speech user interface sounds. The principles used in the design ...
Usability of Non-speech Sounds in User Interfaces
We review the literature on the integration of non-speech sounds to visual interfaces and applications from a usability perspective and subsequently recommend which auditory feedback types serve to enhance human interaction with computers by conveying useful and comprehensible information. We present an overview over varied tasks, functions and environments with a view to establishing the best ...
Identifying Function-specific Prosodic Cues for Non-speech User Interface Sound Design
This study explores the potential of utilising certain prosodic qualities of function-specific vocal expressions in order to design effective non-speech user interface sounds. In an empirical setting, utterances with four context-situated communicative functions were produced by 20 participants. Time series of fundamental frequency (F0) and intensity were extracted from the utterances and analy...
Effects of Speech and Non-Speech Sounds on Short-Term Memory and Possible Implications for In-Vehicle Use Research paper for the ICAD05 workshop "Combining Speech and Sound in the User Interface"
Using auditory output for presenting non-critical but relevant events to the car driver, we compared the effect of four groups of sounds (two speech, two non-speech) on short-term memory and on response time and accuracy. The results indicate that longer speech messages can disrupt short-term memory performance whereas earcons, auditory icons, and single keywords do not cause this effect. Earco...
Publication date: 2008